    Learning and Consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may rule out any efficient solution of it. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give “lower bounds” on this power. We characterize these types by some versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
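
    As an illustration of the consistency requirement described above, the following Python sketch shows identification by enumeration, a strategy that is consistent by construction: it always conjectures the least hypothesis agreeing with every datum seen so far. The finite hypothesis list H is an illustrative assumption; in the paper's recursion-theoretic setting, hypotheses are programs in a numbering of the recursive functions.

    # A minimal sketch of a consistent learner via identification by
    # enumeration. The finite hypothesis space H is an assumption made
    # for illustration only.
    def consistent_learner(H, sample):
        """Return the least index i such that H[i] agrees with every
        observed pair (x, f(x)) in sample, i.e. a hypothesis that
        correctly and completely reflects all data obtained so far."""
        for i, h in enumerate(H):
            if all(h(x) == y for x, y in sample):
                return i
        return None  # no consistent hypothesis in this finite space

    # Toy class of linear functions f_c(x) = c * x.
    H = [lambda x, c=c: c * x for c in range(10)]
    data = []
    for x, y in [(1, 3), (2, 6), (5, 15)]:  # observations of f(x) = 3 * x
        data.append((x, y))
        print(consistent_learner(H, data))  # prints 3 each time: consistent and convergent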

    Learning Recursive Functions Refutably

    Learning recursive functions refutably means that, for every recursive function, the learning machine either has to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where even the most stringent of them turns out to be of remarkable topological and algorithmic richness. All these types are closed under union, though to different degrees. These types are also shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. Then we derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.

    On Learning of Functions Refutably

    Learning of recursive functions refutably informally means that, for every recursive function, the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where even the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though to different degrees. These types are also shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that, in general, stricter refutability constraints cannot be traded for more liberal learning criteria.
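
    To make the refutation signal concrete, here is a minimal Python sketch of one possible shape of a refutably working learner, over a hypothetical finite hypothesis space: it conjectures the least consistent hypothesis and, once every hypothesis has been contradicted by the data, emits a special refutation signal. This is a toy rendering under stated assumptions, not the paper's construction.

    REFUTE = "refute"

    # A refutable learner over a hypothetical finite hypothesis space:
    # output the least consistent hypothesis, or the refutation signal.
    def refutable_learner(H, sample):
        for i, h in enumerate(H):
            if all(h(x) == y for x, y in sample):
                return i  # least hypothesis consistent with the data
        return REFUTE     # every hypothesis contradicted: refuse to learn

    H = [lambda x, c=c: x + c for c in range(3)]   # toy class f_c(x) = x + c
    print(refutable_learner(H, [(0, 2), (4, 6)]))  # -> 2, learns f(x) = x + 2
    print(refutable_learner(H, [(0, 5)]))          # -> 'refute', target outside the class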

    Learning by Erasing

    Learning by erasing is the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated; this hypothesis must be a solution to the learning problem at hand. The present paper deals with learnability by erasing of indexed families of languages, from positive data as well as from positive and negative data. This refers to the following scenario. A family L of target languages and a hypothesis space for it are specified. The learner is eventually fed all positive examples (all labeled examples) of an unknown target language L chosen from L. The target language L is learned by erasing if the learner erases some set of possible hypotheses and the least hypothesis never erased correctly describes L. The capabilities of learning by erasing are investigated in dependence on which sets of hypotheses must or may be erased, and on the choice of the hypothesis space. Class preserving learning…
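
    The erasing process itself is easy to sketch. The following Python illustration assumes a hypothetical finite indexed family of languages given as sets: every candidate contradicted by the positive data is erased, and the conjecture after each example is the least index never erased.

    # Learning by erasing, sketched for a hypothetical finite indexed
    # family of languages. Assumes the target belongs to the family,
    # so at least one index is never erased.
    def erasing_learner(family, text):
        erased = set()
        for w in text:
            for i, lang in enumerate(family):
                if w not in lang:  # lang cannot be the target: erase index i
                    erased.add(i)
            # Conjecture: the least hypothesis never erased so far.
            yield min(i for i in range(len(family)) if i not in erased)

    family = [{0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3}]     # a chain of languages
    print(list(erasing_learner(family, [0, 1, 1, 2])))  # -> [0, 1, 1, 2]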

    On Learning and Co-learning of Minimal Programs

    Freivalds, Karpinski and Smith [8] explored a special type of learning in the limit: identification of an unknown concept (function) by eliminating (erasing) all but one possible hypothesis (this type of learning is called co-learning). The motivation behind learning by erasing lies in the process of human and automated computer learning: often it is much easier to discard incorrect solutions than to come up with the correct one. In Gödel numberings, any learnable family can be learned by an erasing strategy. In this paper we concentrate on co-learning of minimal programs. We show that co-learning of minimal programs, as originally defined, is significantly weaker than learning minimal programs in Gödel numberings. In order to enhance the learning power…
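
    For contrast with learning by erasing, here is a minimal Python sketch of co-learning under the same illustrative finite-class assumption: the machine succeeds only if it erases all but exactly one hypothesis, and that single survivor must be correct.

    # Co-learning: erase all but one hypothesis; the lone survivor must
    # be correct. The finite function class is an illustrative assumption.
    def co_learner(H, sample):
        survivors = [i for i, h in enumerate(H)
                     if all(h(x) == y for x, y in sample)]
        # Identification succeeds only when exactly one candidate remains.
        return survivors[0] if len(survivors) == 1 else None

    H = [lambda x, c=c: c * x for c in range(4)]
    print(co_learner(H, [(0, 0)]))          # None: every f_c fits (0, 0)
    print(co_learner(H, [(0, 0), (2, 6)]))  # 3: only f_3(x) = 3 * x survives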

    Abstract

    This paper deals with two problems: 1) what makes languages learnable in the limit by natural strategies of varying hardness; 2) what makes classes of languages the hardest ones to learn. To quantify hardness of learning, we use intrinsic complexity based on reductions between learning problems. Two types of reductions are considered: weak reductions, mapping texts (representations of languages) to texts, and strong reductions, mapping languages to languages. For both types of reductions, characterizations of complete (hardest) classes in terms of their algorithmic and topological potentials have been obtained. To characterize the strong complete degree, we discovered a new and natural complete class capable of “coding” any learning problem using the density of the set of rational numbers. We have also discovered and characterized rich hierarchies of degrees of complexity based on “core” natural learning problems. The classes in these hierarchies contain “multidimensional” languages, where the information learned from one dimension aids in learning the other dimensions. In one formalization of this idea, the grammars learned from dimensions 1, 2, ..., k specify the “subspace” for dimension k + 1, while the learning strategy for every dimension is predefined. In our other formalization, a “pattern” learned from dimension k specifies the learning strategy for dimension k + 1. A number of open problems are discussed.
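
    The notion of a weak reduction can be made concrete with a toy Python sketch: an operator theta maps texts (streams of examples) of languages in one class to texts of languages in another, and an operator psi translates any grammar that is correct for the image text back into one that is correct for the original language. The doubling construction below is purely an illustrative assumption, not the complete class from the paper.

    # A toy weak reduction: theta rewrites texts, psi translates
    # hypotheses back. Grammars are modeled as plain sets here, which
    # is an illustrative simplification.
    def theta(text):
        """Map a text of L to a text of the image language {2 * w : w in L}."""
        for w in text:
            yield 2 * w

    def psi(grammar_for_image):
        """Translate a hypothesis for the image back to one for L."""
        return {w // 2 for w in grammar_for_image}

    L = {1, 4, 6}
    image_text = list(theta(iter([1, 4, 6, 4])))  # -> [2, 8, 12, 8]
    learned_image = set(image_text)               # what a learner for the image class finds
    print(psi(learned_image) == L)                # True: the reduction preserves success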